259 research outputs found

    Secrets and lies: digital security in a networked world

    Bestselling author Bruce Schneier offers his expert guidance on achieving security on a network. Internationally recognized computer security expert Bruce Schneier offers a practical, straightforward guide to achieving security throughout computer networks. Schneier uses his extensive field experience with his own clients to dispel the myths that often mislead IT managers as they try to build secure systems. This practical guide provides readers with a better understanding of why protecting information is harder in the digital world, what they need to know to protect digital information, how to …

    Authenticating Secure Tokens Using Slow Memory Access

    We present an authentication protocol that allows a token, such as a smart card, to authenticate itself to a back-end trusted computer system through an untrusted reader. This protocol relies on the fact that the token will only respond to queries slowly, and that the token owner will not sit patiently while the reader seems not to be working. This protocol can be used alone, with "dumb" memory tokens, or with processor-based tokens.
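    A minimal sketch of the idea, assuming a shared-secret HMAC challenge-response and an arbitrary rate limit (neither is specified by the abstract): the token answers correctly but refuses to answer quickly, so a malicious reader cannot harvest many responses before the impatient owner pulls the card.

        # Hypothetical illustration only; key handling, the delay value, and the
        # HMAC construction are assumptions, not the paper's protocol.
        import hashlib
        import hmac
        import os
        import time

        class SlowToken:
            """Token that rate-limits its challenge responses."""

            def __init__(self, secret: bytes, min_interval: float = 2.0):
                self._secret = secret            # shared with the trusted back end
                self._min_interval = min_interval
                self._last_reply = 0.0

            def respond(self, challenge: bytes) -> bytes:
                # Refuse to answer faster than once per min_interval seconds.
                wait = self._min_interval - (time.monotonic() - self._last_reply)
                if wait > 0:
                    time.sleep(wait)
                self._last_reply = time.monotonic()
                return hmac.new(self._secret, challenge, hashlib.sha256).digest()

        def backend_verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
            expected = hmac.new(secret, challenge, hashlib.sha256).digest()
            return hmac.compare_digest(expected, response)

        secret = os.urandom(32)
        token = SlowToken(secret)
        challenge = os.urandom(16)           # issued by the trusted back end
        response = token.respond(challenge)  # relayed through the untrusted reader
        print(backend_verify(secret, challenge, response))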

    Platforms, Encryption, and the CFAA: The Case of WhatsApp v NSO Group

    End-to-end encryption technology has gone mainstream. But this wider use has led hackers, cybercriminals, foreign governments, and other threat actors to employ creative and novel attacks to compromise or work around these protections, raising important questions as to how the Computer Fraud and Abuse Act (CFAA), the primary federal anti-hacking statute, is best applied to these new encryption implementations. Now that the Supreme Court has narrowed the CFAA’s scope in Van Buren and suggested it favors a code-based approach to liability under the statute, understanding how best to theorize sophisticated code-based access barriers like end-to-end encryption, and their circumvention, is more important than ever. In this Article, we take up these very issues, using the recent case WhatsApp v. NSO Group as a case study. The case involves a lawsuit launched in 2019 by WhatsApp and Facebook against the cybersecurity firm NSO Group, whose spyware has been linked to surveillance of human rights activists, dissidents, journalists, and lawyers around the world, as well as the death of Washington Post journalist Jamal Khashoggi. The lawsuit, brought under the CFAA, alleged that NSO Group launched a sophisticated hack that compromised countless WhatsApp users, many of whom were journalists and activists abroad. Despite these broader human rights dimensions, the lawsuit’s reception among experts has been largely critical. We analyze WhatsApp’s CFAA claims to bring greater clarity to these issues and to illustrate how best to theorize encrypted platforms and networks under the CFAA. In our view, the alleged attack on WhatsApp’s encrypted network is actionable under the CFAA and is best understood using what we call a network trespass theory of liability. Our theory and analysis clarify the CFAA’s application, will lead to better human rights accountability and privacy and security outcomes, and provide guidance on critical post-Van Buren issues. This includes setting out a new approach to theorizing the scope and boundaries of the computer systems, services, and information at issue, and taking the intended function of code-based access barriers into account when determining whether circumvention should trigger liability.

    AES Key Agility Issues in High-Speed IPsec Implementations

    Some high-speed IPsec hardware systems need to support many thousands of security associations. The cost of switching among different encryption keys can dramatically affect throughput, particularly for the very common case of small packets. Three of the AES finalists (Rijndael, Serpent, and Twofish) provide very high key agility, as is required for such applications. The other two candidates (MARS and RC6) exhibit low key agility and may not be appropriate for use in such equipment.
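    To make the key-agility concern concrete, the toy, software-only sketch below (not from the paper, and not representative of dedicated IPsec hardware) contrasts reusing one AES key schedule across many small packets with setting up a fresh key for every packet; the packet size and key count are arbitrary.

        # Requires the third-party "cryptography" package.
        import os
        import time
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        PACKET = os.urandom(64)                          # small IPsec-style payload
        NONCE = os.urandom(16)
        KEYS = [os.urandom(16) for _ in range(10_000)]   # one key per security association

        # Case 1: one key reused for every packet (key schedule computed once).
        start = time.perf_counter()
        enc = Cipher(algorithms.AES(KEYS[0]), modes.CTR(NONCE)).encryptor()
        for _ in KEYS:
            enc.update(PACKET)
        reuse = time.perf_counter() - start

        # Case 2: a fresh key (and a fresh key schedule) for every packet.
        start = time.perf_counter()
        for key in KEYS:
            Cipher(algorithms.AES(key), modes.CTR(NONCE)).encryptor().update(PACKET)
        switch = time.perf_counter() - start

        print(f"single key: {reuse:.3f}s   per-packet key switch: {switch:.3f}s")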

    Demonstrations of the Potential of AI-based Political Issue Polling

    Political polling is a multi-billion dollar industry with outsized influence on the societal trajectory of the United States and nations around the world. However, it has been challenged by factors that stress its cost, availability, and accuracy. At the same time, artificial intelligence (AI) chatbots have become compelling stand-ins for human behavior, powered by increasingly sophisticated large language models (LLMs). Could AI chatbots be an effective tool for anticipating public opinion on controversial issues, to the extent that they could be used by campaigns, interest groups, and polling firms? We have developed a prompt engineering methodology for eliciting human-like survey responses from ChatGPT, which simulates how a person described by a set of demographic factors would respond to a policy question and produces both an ordinal numeric response score and a textual justification. We execute large-scale experiments, querying for thousands of simulated responses at a cost far lower than human surveys. We compare simulated data to human issue polling data from the Cooperative Election Study (CES). We find that ChatGPT is effective at anticipating both the mean level and distribution of public opinion on a variety of policy issues, such as abortion bans and approval of the US Supreme Court, particularly in their ideological breakdown (correlation typically >85%). However, it is less successful at anticipating demographic-level differences. Moreover, ChatGPT tends to overgeneralize to new policy issues that arose after its training data was collected, such as US support for involvement in the war in Ukraine. Our work has implications for our understanding of the strengths and limitations of the current generation of AI chatbots as virtual publics or online listening platforms, future directions for LLM development, and applications of AI tools to the political domain. (Abridged)
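    As a rough illustration of persona-conditioned prompting of the kind the abstract describes, the sketch below asks a chat model for an ordinal score plus a one-sentence justification; the prompt wording, model name, and scoring scale are assumptions, not the authors' actual methodology.

        # Requires the "openai" package and an OPENAI_API_KEY in the environment.
        from openai import OpenAI

        client = OpenAI()

        persona = "a 45-year-old woman from Ohio, college educated, political independent"
        question = "Do you approve or disapprove of how the US Supreme Court is handling its job?"

        completion = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": f"Answer the survey question as {persona}. "
                            "Reply with a score from 1 (strongly disapprove) to 5 "
                            "(strongly approve), then a one-sentence justification."},
                {"role": "user", "content": question},
            ],
        )
        print(completion.choices[0].message.content)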

    Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models

    AI programs, built using large language models, make it possible to automatically create phishing emails based on a few data points about a user. They stand in contrast to traditional phishing emails that hackers manually design using general rules gleaned from experience. The V-Triad is an advanced set of rules for manually designing phishing emails to exploit our cognitive heuristics and biases. In this study, we compare the performance of phishing emails created automatically by GPT-4 and manually using the V-Triad. We also combine GPT-4 with the V-Triad to assess their combined potential. A fourth group, exposed to generic phishing emails, served as our control group. We utilized a factorial approach, sending emails to 112 randomly selected participants recruited for the study. The control-group emails received a click-through rate of 19-28%, the GPT-generated emails 30-44%, the emails generated with the V-Triad 69-79%, and the emails generated by GPT and the V-Triad together 43-81%. Each participant was asked to explain why they did or did not click the link in the email. These answers often contradict each other, highlighting the need for personalized content: the cues that make one person avoid a phishing email make another person fall for it. Next, we used four popular large language models (GPT, Claude, PaLM, and LLaMA) to detect the intention of phishing emails and compared the results to human detection. The language models demonstrated a strong ability to detect malicious intent, even in non-obvious phishing emails. They sometimes surpassed human detection, though they were often slightly less accurate than humans.
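    The sketch below is a minimal, hypothetical version of the detection side of the study, asking an LLM to label a message as phishing or legitimate; the prompt, model name, and sample email are invented for illustration.

        # Requires the "openai" package and an OPENAI_API_KEY in the environment.
        from openai import OpenAI

        client = OpenAI()

        email_text = (
            "Subject: Action required: verify your payroll details\n"
            "We updated the HR portal. Please confirm your bank information today "
            "using the link below or your next payment may be delayed."
        )

        result = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            temperature=0,
            messages=[
                {"role": "system",
                 "content": "You screen emails for phishing. Answer 'phishing' or "
                            "'legitimate', then one sentence naming the key cue."},
                {"role": "user", "content": email_text},
            ],
        )
        print(result.choices[0].message.content)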

    Legal Risks of Adversarial Machine Learning Research

    Adversarial machine learning is the systematic study of how motivated adversaries can compromise the confidentiality, integrity, and availability of machine learning (ML) systems through targeted or blanket attacks. The problem of attacking ML systems is so prevalent that CERT, the federally funded research and development center tasked with studying attacks, issued a broad vulnerability note on how most ML classifiers are vulnerable to adversarial manipulation. Google, IBM, Facebook, and Microsoft have committed to investing in securing machine learning systems. The US and EU are likewise putting the security and safety of AI systems as a top priority. Now, research on adversarial machine learning is booming, but it is not without risks. Studying or testing the security of any operational system may violate the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute that creates liability for hacking. The CFAA’s broad scope, rigid requirements, and heavy penalties, critics argue, have a chilling effect on security research. Adversarial ML security research is likely no different. However, prior work on adversarial ML research and the CFAA is sparse and narrowly focused. In this article, we help address this gap in the literature. For legal practitioners, we describe the complex and confusing legal landscape of applying the CFAA to adversarial ML. For adversarial ML researchers, we describe the potential risks of conducting adversarial ML research. We conclude with an analysis predicting how the US Supreme Court may resolve some present inconsistencies in the CFAA’s application in Van Buren v. United States, an appeal expected to be decided in 2021. We argue that the court is likely to adopt a narrow construction of the CFAA, and that this will actually lead to better adversarial ML security outcomes in the long term.
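    For readers unfamiliar with the underlying technique, the sketch below shows a textbook fast gradient sign method (FGSM) perturbation against a toy, untrained classifier; it is illustrative only and is not drawn from the article.

        # Requires PyTorch; the model, input, and epsilon are placeholders.
        import torch
        import torch.nn.functional as F

        torch.manual_seed(0)
        model = torch.nn.Linear(20, 2)            # stand-in for a trained classifier
        x = torch.randn(1, 20, requires_grad=True)
        true_label = torch.tensor([0])

        # Gradient of the loss with respect to the input drives the perturbation.
        loss = F.cross_entropy(model(x), true_label)
        loss.backward()

        epsilon = 0.25
        x_adv = x + epsilon * x.grad.sign()       # FGSM step: may flip the prediction

        print("clean prediction:      ", model(x).argmax(dim=1).item())
        print("adversarial prediction:", model(x_adv).argmax(dim=1).item())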